
fix: terminate infinite retry loop in LoadSkillResourceTool on RESOURCE_NOT_FOUND#5651

Open
Raman369AI wants to merge 4 commits into google:main from Raman369AI:fix/skill-resource-not-found-infinite-loop

Conversation


@Raman369AI Raman369AI commented May 10, 2026

Fixes #5652

Summary

Closes a latent defect in SkillToolset that lets the LLM enter an unbounded retry loop when load_skill_resource returns RESOURCE_NOT_FOUND. Because the only existing backstop is RunConfig.max_llm_calls (default 500), a hallucinated path can quietly burn the entire per-invocation call budget before the framework intervenes — and max_llm_calls is a global cap on legitimate reasoning, not a defense against this specific failure mode.

This fix adds invocation-scoped termination to the tool itself, plus a complementary system-prompt rule, so the framework no longer relies on a perfect upstream prompt to avoid unbounded loops on a known error path.

Why this matters

This is a structural problem, not an edge case:

  • The L2 load_skill response intentionally omits a manifest of available files — that is the agentskills.io progressive-disclosure contract, and it is correct for token economy. But it means the LLM must infer paths from prose instructions inside SKILL.md. Inferred paths are routinely wrong, even with strong models.
  • RESOURCE_NOT_FOUND is returned as a structured soft string. From the model's perspective it looks transient and recoverable, exactly like a flaky network error — so it retries. There is no terminal signal in the current implementation.
  • The LLM hallucinates a different path on every retry. In live testing the model never repeated the same path twice — it generated a new plausible variant each time. A per-path guard is therefore insufficient; the termination must be invocation-wide.
  • Nothing in SkillToolset distinguishes "first miss" from "500th miss". The loop terminates only when max_llm_calls (default 500) is hit.
  • Confounding scope ambiguity: the default system prompt does not draw a boundary between skill-bundled files (the legitimate target of load_skill_resource) and runtime user inputs (e.g., a PDF the user is processing). A model asked to analyze a runtime document will sometimes route it through load_skill_resource, hit RESOURCE_NOT_FOUND, and loop — even though the file was never a skill resource.

Live trace evidence

Captured via GET /debug/trace/session/{session_id} against adk web with a patched build:

SPAN: execute_tool load_skill_resource
  args:       {'file_path': 'references/reference_doc.md', 'skill_name': 'document-classifier'}
  error_code: RESOURCE_NOT_FOUND
  error:      Resource 'references/reference_doc.md' not found in skill 'document-classifier'.

SPAN: execute_tool load_skill_resource
  args:       {'skill_name': 'document-classifier', 'file_path': 'references/Document1.md'}
  error_code: RESOURCE_NOT_FOUND_FATAL
  error:      Resource 'references/Document1.md' not found in skill 'document-classifier'.
              This is resource lookup failure #2 this invocation. Do not retry any path
              — report the error to the user and stop.

The model tried references/reference_doc.md first, then hallucinated a completely different path (references/Document1.md) on the retry. The counter-based guard caught the second failure and returned RESOURCE_NOT_FOUND_FATAL, terminating the loop. A per-path guard would have missed this entirely.

What changed

Two layers in one file (src/google/adk/tools/skill_toolset.py), plus tests:

1. Code: invocation-scoped failure counter in LoadSkillResourceTool.run_async

A total RESOURCE_NOT_FOUND count (across all paths) is tracked on tool_context.state under the key:

temp:_adk_skill_resource_not_found_count_<invocation_id>
  • The temp: prefix uses ADK's existing convention so the value is trimmed from the persisted event delta and never reaches durable session storage.
  • The <invocation_id> suffix ensures correctness on in-memory session backends, where temp: keys are added to session.state and are not auto-cleared between invocations. Without the suffix, a failure in invocation A would spuriously escalate the first attempt in invocation B.
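The trimming behavior the `temp:` prefix relies on can be illustrated with a minimal sketch. This is an assumed rendering of the convention, not the actual ADK persistence code:

```python
# Minimal illustration (assumed convention, not actual ADK internals) of how
# a "temp:" prefix keeps the counter out of the persisted state delta.

state_delta = {
    "user_preference": "dark_mode",
    "temp:_adk_skill_resource_not_found_count_inv123": 2,
}

# Keys with the "temp:" prefix are dropped before the delta is persisted,
# so the counter never reaches durable session storage.
persisted = {k: v for k, v in state_delta.items() if not k.startswith("temp:")}
```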

Behavior:

  • First RESOURCE_NOT_FOUND within an invocation (any path) → returns RESOURCE_NOT_FOUND (unchanged).
  • Any subsequent RESOURCE_NOT_FOUND within the same invocation → returns the new RESOURCE_NOT_FOUND_FATAL error code with an explicit "do not retry — report the error and stop" message. The counter includes the failure number so the LLM has unambiguous context.

The guard is invocation-scoped, path-agnostic (catches hallucinated variants), and adds no overhead on the success path.

2. Prompt: two additions to _DEFAULT_SKILL_SYSTEM_INSTRUCTION

  • No-retry rule: "If load_skill_resource returns any error, do not retry the same path. Report the error to the user and stop."
  • Scope boundary added to rule 3: clarifies that load_skill_resource is only for skill-bundled files in references/, assets/, or scripts/ — not for runtime user input. This addresses the secondary failure mode where the model confuses a runtime document with a skill resource.
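The two additions might look roughly like the constants below. This is a hypothetical rendering; the actual wording and structure of `_DEFAULT_SKILL_SYSTEM_INSTRUCTION` in the ADK source may differ:

```python
# Hypothetical rendering of the two prompt additions; constant names and
# exact wording are assumptions, not the actual ADK prompt text.

NO_RETRY_RULE = (
    "If load_skill_resource returns any error, do not retry the same path. "
    "Report the error to the user and stop."
)

SCOPE_RULE = (
    "load_skill_resource is only for skill-bundled files under references/, "
    "assets/, or scripts/. Never use it to read runtime user input such as "
    "documents supplied in the conversation."
)
```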

Why both layers

Defense-in-depth. Code-only termination produces confusing downstream behavior when the LLM has no semantic reason to stop. Prompt-only termination relies on the LLM following the rule, which imperfect upstream prompts can override. Together they are robust.

Considered and rejected

| Alternative | Why not |
| --- | --- |
| Per-path retry guard | LLM hallucinates a different path on every retry — a per-path list is always empty at each attempt and never escalates |
| Tighten or default-lower max_llm_calls | Caps the agent's overall reasoning budget; punishes legitimate long-running tasks; does not address the specific defect |
| Symptomatic after_tool_callback workaround | Pushes the fix onto every user of SkillToolset; the framework still ships with the loop |
| Add a full available_resources manifest to the L2 load_skill response | Defeats the lazy-loading / token-saving design that the Skills spec is built around |
| Introduce a new list_skill_resources tool | Violates the L1→L2→L3 progressive disclosure contract from agentskills.io |
| Include available paths on the fatal response | Re-introduces the manifest cost; contradicts the "stop" semantic the fatal code is meant to enforce |
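The first rejected alternative can be demonstrated with a toy comparison, using the two paths from the live trace above. Purely illustrative:

```python
# Toy comparison of the rejected per-path guard vs the adopted counter.
# Both paths come from the live trace earlier in this PR description.

paths_tried = ["references/reference_doc.md", "references/Document1.md"]

# Per-path guard: escalates only if the *same* path fails twice. Since the
# model hallucinates a new path each retry, this never fires.
seen: set[str] = set()
per_path_escalated = False
for p in paths_tried:
    if p in seen:
        per_path_escalated = True
    seen.add(p)

# Counter guard: escalates on any second miss, regardless of path.
counter_escalated = len(paths_tried) >= 2
```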

Test plan

New tests in tests/unittests/tools/test_skill_toolset.py:

  • test_load_resource_repeated_failure_escalates_to_fatal — second call on the same missing path within an invocation returns RESOURCE_NOT_FOUND_FATAL.
  • test_load_resource_different_path_also_escalates_to_fatal — second call on a different missing path also returns RESOURCE_NOT_FOUND_FATAL; proves the counter is path-agnostic and catches the hallucinated-variant pattern observed in live testing.
  • test_load_resource_failures_isolated_per_invocation — failures from invocation A do not escalate the first attempt in invocation B even when sharing the same session state dict.
  • test_load_resource_counter_uses_temp_prefix — every key written by the guard starts with temp: so it is trimmed from the persisted event delta and never reaches durable storage.
  • test_load_resource_first_missing_returns_soft_error — first RESOURCE_NOT_FOUND in an invocation returns the unchanged soft error code and message.

Verification:

  • All 86 tests pass (81 pre-existing + 4 updated guard tests + 1 new soft-error coverage test), 0 regressions.
  • pyink --check clean on both modified files.
  • isort --check-only clean on both modified files.
  • mypy src/google/adk/tools/skill_toolset.py reports the same 17 pre-existing errors as main; zero new errors introduced.

Backwards compatibility

  • The first-failure behavior is unchanged: same RESOURCE_NOT_FOUND error code, same error string. Existing callers see no difference.
  • The new RESOURCE_NOT_FOUND_FATAL code is purely additive.
  • The new state key uses the temp: prefix and is guaranteed not to leak into persisted session storage.

fix: terminate infinite retry loop in LoadSkillResourceTool on RESOURCE_NOT_FOUND

When load_skill_resource returned RESOURCE_NOT_FOUND, the LLM treated it
as a transient soft error and retried the same path indefinitely,
producing runaway invocations and large API bills. Two complementary
guards are added:

1. Code (LoadSkillResourceTool.run_async): an invocation-scoped retry
   guard tracks already-attempted (skill, path) pairs in
   tool_context.state under a temp:_adk_skill_resource_failed_paths_<id>
   key. The temp: prefix prevents persistence to durable session
   storage; the invocation_id suffix prevents leakage across invocations
   on in-memory session backends (where temp keys are not auto-cleared).
   A second call on the same path within the same invocation returns
   RESOURCE_NOT_FOUND_FATAL with an explicit stop instruction, giving
   the LLM an unambiguous terminal signal.

2. Prompt (DEFAULT_SKILL_SYSTEM_INSTRUCTION): adds a rule prohibiting
   retrying the same path after any error, and a scope boundary
   clarifying that load_skill_resource is for skill-bundled files only,
   not for runtime user input (a common source of hallucinated paths).

Neither guard alone is sufficient: a code-only stop produces confusing
downstream behavior when the LLM has no semantic understanding of why
to stop; a prompt-only guard can be ignored under imperfect system
prompts. Both layers are required for defense-in-depth.

Tests cover: repeat-failure escalation, distinct-path soft errors not
escalating, cross-invocation isolation with shared session state, and
the temp: prefix on tracking keys.
@Raman369AI (Author)

E2E Reproduction Evidence — Runner

This comment adds live end-to-end evidence as required by CONTRIBUTING.md § Manual End-to-End Tests (Runner).


Minimal Reproduction Agent

# test_agent/agent.py
from google.adk.agents import Agent
from google.adk.skills import models
from google.adk.tools.skill_toolset import SkillToolset

greeting_skill = models.Skill(
    frontmatter=models.Frontmatter(
        name="document-classifier",
        description=(
            "Classifies the document supplied by the user based on the reference document"
        ),
    ),
    instructions=(
        "Use Document 1 as reference document and Document 2 as input. Classify each document as"
        " python-code, non-python-code, or mixed. Return a short comparison."
    ),
)
root_agent = Agent(
    model="gemini-3-flash-preview",
    name="root_agent",
    description="A repro agent for SkillToolset inline document handling.",
    instruction=("Always use the document as the reference and summarize it."),
    tools=[SkillToolset(skills=[greeting_skill])],
)

Run command:

adk web

Trigger prompt (any vague document reference is sufficient):

summarize my document

What happens (unpatched)

The skill instructions reference "Document 1" and "Document 2" as if they are bundled skill resources, but no files are attached to the skill (references/, assets/, or scripts/ directories are absent). The agent's top-level instruction reinforces this by saying "Always use the document as the reference", which nudges the LLM further toward attempting resource loads.

From the LLM's perspective, RESOURCE_NOT_FOUND is structurally indistinguishable from a transient failure (network hiccup, lazy-load not yet available). With no termination guard and no manifest of valid paths, the model hallucinates plausible path variants and retries indefinitely:

ADK Web trace — load_skill_resource call sequence (calls #6–#18 visible, loop was still running at kill time):

#6 load_skill_resource
#7 load_skill_resource
#8 load_skill_resource
#9 load_skill_resource
#10 load_skill_resource
#11 load_skill_resource
#12 load_skill_resource
#13 load_skill_resource
#14 load_skill_resource
#15 load_skill_resource
#16 load_skill_resource
#17 load_skill_resource
#18 load_skill_resource
(continues…)

(Every call returned the RESOURCE_NOT_FOUND error string; the unbroken sequence reflects the model immediately re-invoking the tool after each soft failure.)

Server log — loop persisting through and beyond CTRL+C:

2026-05-11 21:20:00,837 - INFO - google_llm.py:208 - Sending out request, model: gemini-3-flash-preview
2026-05-11 21:20:00,861 - INFO - google_llm.py:276 - Response received from the model.
2026-05-11 21:20:02,762 - INFO - google_llm.py:276 - Response received from the model.
2026-05-11 21:20:02,785 - INFO - google_llm.py:208 - Sending out request, model: gemini-3-flash-preview
...
^CINFO:     Shutting down
INFO:     Waiting for connections to close. (CTRL+C to force quit)
^C2026-05-11 21:20:16,718 - INFO - google_llm.py:276 - Response received from the model.
2026-05-11 21:20:16,741 - INFO - google_llm.py:208 - Sending out request, model: gemini-3-flash-preview
...
^C^C^C^C^C^C^C2026-05-11 21:20:31,023 - INFO - google_llm.py:276 - Response received from the model.
2026-05-11 21:20:31,072 - INFO - google_llm.py:208 - Sending out request, model: gemini-3-flash-preview

The loop continued firing new LLM requests for ~20 seconds after the first CTRL+C, requiring repeated interrupt signals to force-quit. In-flight API calls are not cancellable — every iteration that completes before the process dies is billed.


Why ambiguous prompts make this a framework issue, not a user error

This reproduction agent is not contrived — skill instructions that reference documents by natural-language names ("Document 1", "reference document") are a normal and expected authoring pattern. The ambiguity is unavoidable: skill authors do not know at write-time whether end-users will supply input inline or whether the model will infer a resource load. The framework must be defensive here because:

  1. RESOURCE_NOT_FOUND is a soft error by design — it communicates "file absent" but looks transient to the LLM.
  2. No path manifest is returned (intentional, per the progressive-disclosure spec) — the LLM has no way to self-correct.
  3. max_llm_calls does not help — it caps the agent's total budget, not this specific failure mode, so a short cap breaks legitimate long-running agents.
  4. A prompt-only fix is insufficient — imperfect upstream system prompts (like this repro) will always exist in the wild. A $105 bill from a prior production incident confirms real-world exposure.

The defense-in-depth approach in this PR — RESOURCE_NOT_FOUND_FATAL on the second same-path failure within an invocation, plus an explicit no-retry instruction in the default system prompt — is the correct framework-level fix.

@rohityan rohityan self-assigned this May 12, 2026
@rohityan rohityan added tools [Component] This issue is related to tools needs review [Status] The PR/issue is awaiting review from the maintainer labels May 12, 2026
@rohityan rohityan requested a review from xuanyang15 May 12, 2026 03:48
@rohityan (Collaborator)

Hi @Raman369AI , Thank you for your contribution! We appreciate you taking the time to submit this pull request. Your PR has been received by the team and is currently under review. We will provide feedback as soon as we have an update to share.

@rohityan (Collaborator)

Hi @xuanyang15, can you please review this?

@xuanyang15 xuanyang15 self-assigned this May 12, 2026
@rohityan rohityan removed their assignment May 12, 2026
- Replace per-path failed-paths list with a total RESOURCE_NOT_FOUND
  counter (temp:_adk_skill_resource_not_found_count_<invocation_id>).
  Live tracing showed the LLM hallucinates a different path on every
  retry, so a per-path guard never fired. The counter catches any second
  failure regardless of path.
- Fix forbidden logger pattern in oauth_mcp_server.py
  (getLogger(__name__) → getLogger('google_adk.' + __name__)).
- Update tests to match counter-based logic: rename/rewrite guard tests,
  add test_load_resource_different_path_also_escalates_to_fatal to cover
  the hallucinated-variant pattern, fix mock state issue in parametrized
  test and test_scripts_resource_not_found.
@Raman369AI Raman369AI force-pushed the fix/skill-resource-not-found-infinite-loop branch from f18f9f9 to f89e4b6 on May 13, 2026 03:03


Development

Successfully merging this pull request may close these issues.

[Bug] LoadSkillResourceTool retries RESOURCE_NOT_FOUND indefinitely; default max_llm_calls=500 is the only backstop
